---
title: Build experiments
description: Build models in minutes, gain insights, compare results, then move your models into production.
---

# Build experiments {: #build-experiments }

DataRobot takes the data you provide, generates multiple machine learning models, and recommends the best model to put into production. Combining your domain knowledge with DataRobot's automated machine learning, you can build successful models that solve real-world problems&mdash;in minutes!

## 1: Create a Use Case {: #create-a-use-case }

From the Workbench directory, click **Create Use Case** in the upper right:

![](images/wb-uc-0.png)

Provide a name for the Use Case and click the check mark to accept. You can change the name at any time by opening the Use Case and clicking the existing name:

![](images/wb-uc-6.png)

From there, you can [build](wb-build-usecase), modify, manage, and share the Use Case with others.
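If you prefer to work programmatically, the same step can be sketched with the DataRobot Python client (`pip install datarobot`). This is a minimal illustration, not a complete reference; the endpoint, token, and Use Case name below are placeholders you would replace with your own values.

```python
import datarobot as dr

# Connect to DataRobot (values are placeholders; use your own endpoint and API token).
dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

# Create a Use Case, mirroring the Create Use Case button in Workbench.
use_case = dr.UseCase.create(name="Customer churn analysis")
print(use_case.id)
```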

## 2: Create an experiment and add data {: #create-an-experiment-and-add-data }

After you create a Use Case, the next step is creating an experiment to start building models. Each Workbench experiment is a set of parameters (data, target, and modeling settings) that you can compare across experiments to find the optimal model for your business problem:

![](images/wb-exp-1.png)

Add data to the new experiment, either by [adding new data](gs-wb-data) (1) or by selecting a dataset already loaded into the Use Case (2).

![](images/wb-exp-2.png)
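The equivalent data-loading step can be sketched with the Python client. This assumes a configured `dr.Client(...)` session; the file name and experiment name are placeholders (in the client, a Workbench experiment corresponds to a project).

```python
import datarobot as dr

# Register a local file as a dataset (assumes dr.Client(...) is already configured).
dataset = dr.Dataset.create_from_file(file_path="churn.csv")  # placeholder file

# Start an experiment (project) from that dataset.
project = dr.Project.create_from_dataset(dataset.id, project_name="Churn experiment")
```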

## 3: Set the target and start modeling {: #set-the-target-and-start-modeling }

Once you have proceeded to target selection, Workbench prepares the dataset for modeling ([EDA 1](eda-explained#eda1){ target=_blank }). When the process finishes, set the target either by:

=== "Hover on feature name"

	Scroll through the list of features to find your target. If it is not showing, expand the list from the bottom of the display:

	![](images/wb-exp-7.png)

	Once located, click the entry in the table to use the feature as the target.

	![](images/wb-exp-8.png)

=== "Enter target name"

	Type the name of the target feature you would like to predict in the entry box. DataRobot lists matching features as you type:

	![](images/wb-exp-9.png)

Once a target is set, Workbench displays a histogram showing the target feature's distribution and, in the right pane, a summary of the experiment settings.

![](images/wb-exp-10.png)

From here, you can build models with the default settings, or [modify the default settings](ml-experiment-create#customize-settings) first. When you are ready, click **Start modeling** to begin the [Quick mode](model-data#modeling-modes-explained){ target=_blank } Autopilot modeling process.
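Setting the target and starting Quick mode can also be sketched with the Python client. This assumes a configured client session; the project ID and target name are placeholders.

```python
import datarobot as dr

project = dr.Project.get("YOUR_PROJECT_ID")  # placeholder project ID

# Set the target and start Quick mode Autopilot (Workbench's default).
project.analyze_and_model(target="Churn", mode=dr.AUTOPILOT_MODE.QUICK)

# Block until Autopilot finishes building models.
project.wait_for_autopilot()
```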

## 4: Evaluate models {: #evaluate-models }

Once you start modeling, Workbench begins to construct your model Leaderboard, a list of models ranked by performance that supports quick model evaluation. The Leaderboard summarizes model information, including scoring, for each model built in an experiment. From the Leaderboard, you can click a model to access visualizations for further exploration. These tools help you decide what to change in your next experiment.

![](images/wb-exp-eval-2.png)

After Workbench completes [Quick mode](model-data#modeling-modes-explained){ target=_blank } at the 64% sample size, it retrains the most accurate model on 100% of the data. That model is marked with the [**Prepared for Deployment**](model-rec-process#prepare-a-model-for-deployment){ target=_blank } badge.

For all Leaderboard models, you can view model insights to help interpret, explain, and validate what drives a model's predictions. Available insights depend on the experiment type, but may include:

* [Feature Impact](ml-experiment-evaluate#feature-impact)
* [Feature Effects](ml-experiment-evaluate#feature-effects)
* [Blueprint](ml-experiment-evaluate#blueprint)
* [ROC Curve](ml-experiment-evaluate#roc-curve)
* [Lift Chart](ml-experiment-evaluate#lift-chart)
* [Residuals](ml-experiment-evaluate#residuals)
* [Accuracy Over Time](ml-experiment-evaluate#accuracy-over-time)
* [Stability](ml-experiment-evaluate#stability)

From the Leaderboard, you can also [generate compliance documentation](ml-experiment-evaluate#compliance-documentation) and [train a model on new settings](ml-experiment-add#train-on-new-settings).
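Leaderboard inspection has a programmatic counterpart in the Python client. This sketch assumes a configured client and a finished project; the project ID is a placeholder.

```python
import datarobot as dr

project = dr.Project.get("YOUR_PROJECT_ID")  # placeholder project ID

# Models are returned in Leaderboard order; print each with its validation score.
for model in project.get_models():
    print(model.model_type, model.metrics[project.metric]["validation"])

# Compute (or fetch, if already computed) Feature Impact for the top model.
top_model = project.get_models()[0]
impact = top_model.get_or_request_feature_impact()
```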

## 5: Make predictions {: #make-predictions }

After you create an experiment and train models, you can make predictions to validate those models. Select the model from the **Models** list and then click **Model actions > Make predictions**.

![](images/wb-model-action-pred.png)

On the **Make Predictions** page, upload a **Prediction source**:

![](images/wb-pred-source.png)

After you upload a prediction source, you can [configure the prediction options and make predictions](wb-predict).
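The prediction workflow can likewise be sketched with the Python client. This assumes a configured client session; the project ID and scoring file are placeholders.

```python
import datarobot as dr

project = dr.Project.get("YOUR_PROJECT_ID")  # placeholder project ID
model = project.get_models()[0]              # top Leaderboard model

# Upload a prediction source, request predictions, and wait for the results.
dataset = project.upload_dataset("new_customers.csv")  # placeholder file
pred_job = model.request_predictions(dataset.id)
predictions = pred_job.get_result_when_complete()  # returned as a pandas DataFrame
```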

## Next steps {: #next-steps }

From here, you can:

* [Create DataRobot Notebooks](gs-code/index).
* [Leverage AI Accelerators](accelerators/index).
